10 research outputs found

    Biometric Systems and Their Applications

    Nowadays, insecurity is an increasing concern in many sectors, as are the computing techniques deployed to counter it: access control to computers, e-commerce, banking, etc. There are two traditional ways of identifying an individual. The first is knowledge based: it relies on information the individual knows, such as the PIN code used to activate a mobile phone. The second is based on the possession of a token, such as an identity document, a key, or a badge. These two identification methods can be combined to obtain increased security, as in bank cards. However, each has its weaknesses. In the first case, the password can be forgotten or guessed by a third party; in the second, the badge (or ID or key) may be lost or stolen. Biometric features offer an alternative to these two identification modes. Their advantage is that they are universal, measurable, unique, and permanent. The interest of applications using biometrics can be summed up in two aims: to make everyday life easier and to prevent fraud.

    An improvement and a fast DSP implementation of the bit flipping algorithms for low density parity check decoder

    For low density parity check (LDPC) decoding, hard-decision algorithms are sometimes more suitable than soft-decision ones, particularly in high-throughput and high-speed applications. However, there is a considerable performance gap between these two classes of algorithms, in favor of the soft-decision ones. In order to reduce this gap, this work introduces two improved versions of the hard-decision algorithms: the adaptive gradient descent bit-flipping (AGDBF) and the adaptive reliability ratio weighted GDBF (ARRWGDBF). An adaptive weighting and correction factor is introduced in each case, yielding a significant gain in bit error rate. As a second contribution, a real-time implementation of the proposed solutions on a digital signal processor (DSP) is carried out to optimize and improve the performance of these new approaches. The results of numerical simulations and of the DSP implementation reveal faster convergence, lower processing time, and reduced memory consumption compared to soft-decision algorithms. For the irregular LDPC code, our approaches achieve gains of 0.25 and 0.15 dB for the AGDBF and ARRWGDBF algorithms, respectively.
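
    As an illustration of the class of algorithms being improved, the following is a minimal Python sketch of a weighted gradient descent bit-flipping (GDBF) iteration. The fixed weight w is an illustrative assumption; the adaptive weighting and correction factor proposed in the paper are not reproduced here.

        import numpy as np

        def wgdbf_decode(H, y, w=0.7, max_iter=50):
            # H: (m, n) binary parity-check matrix; y: (n,) received BPSK values.
            # w is a fixed illustrative weight, not the adaptive factor of the paper.
            x = np.where(y >= 0, 1, -1)                     # hard decisions in {-1, +1}
            for _ in range(max_iter):
                # bipolar syndrome: +1 if a check is satisfied, -1 otherwise
                syn = np.array([np.prod(x[H[m] == 1]) for m in range(H.shape[0])])
                if np.all(syn == 1):                        # valid codeword reached
                    break
                # inversion (energy) function: channel term + weighted check term
                energy = x * y + w * (H.T @ syn)
                x[energy == energy.min()] *= -1             # flip the least reliable bit(s)
            return (x < 0).astype(int)                      # map back to {0, 1} bits

    At each iteration the bits with the smallest inversion function are flipped; per the abstract, the contribution of the paper is to make the weighting and correction term adaptive rather than fixed as in this sketch.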

    A Comparative Study of Multiple Object Detection Using Haar-Like Feature Selection and Local Binary Patterns in Several Platforms

    Object detection has been attracting much interest due to the wide spectrum of applications that use it, driven by the increasing processing power available in software and hardware platforms. In this work we present an application for multiple-object detection built on the OpenCV libraries. The complexity-related aspects considered in object detection using a cascade classifier are described. Furthermore, we discuss the profiling and porting of the application onto an embedded platform and compare the results with those obtained on traditional platforms. The proposed application targets real-time system implementation, and the results provide a metric for identifying the cases in which object detection applications are more complex and those in which they are simpler.
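
    As a point of reference for the detection pipeline discussed above, here is a minimal OpenCV-Python sketch of multi-scale detection with a cascade classifier. The image path, the stock frontal-face Haar cascade, and the detectMultiScale parameters are illustrative assumptions; an LBP cascade XML file loads through the same CascadeClassifier interface. This is not the application or profiling setup developed in the paper.

        import cv2

        # Stock Haar cascade shipped with opencv-python; an LBP cascade XML
        # can be loaded the same way. Paths and parameters are illustrative.
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        img = cv2.imread("input.jpg")                       # hypothetical input image
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

        # Multi-scale detection: scaleFactor and minNeighbors trade speed for accuracy.
        objects = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                           minNeighbors=5, minSize=(30, 30))

        for (x, y, w, h) in objects:
            cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imwrite("detected.jpg", img)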

    Analysis of HEVC Video Encoder Using ARM Cortex-A8 with NEON Technology

    This work presents an implementation of the latest software version of the High Efficiency Video Coding (HEVC) encoder on a single low-cost mobile processor, the ARM Cortex-A8, using its NEON extension, a Single Instruction Multiple Data (SIMD) architecture. By optimizing the encoder with this technology, the execution time is considerably reduced.

    On the ways of the HEVC/H265 standard

    High Efficiency Video Coding (HEVC) is the new video compression standard developed by a Joint Collaborative Team of ISO/IEC MPEG and ITU-T VCEG. Standardized in January 2013, HEVC was designed to offer significantly improved compression performance relative to preceding standards by using several new tools that increase coding efficiency. This paper gives an overview of the HEVC standard and compares two implementations of the HM 12.0 reference software.

    Machine Learning Based Fast QTMTT Partitioning Strategy for VVenC Encoder in Intra Coding

    The newest video compression standard, Versatile Video Coding (VVC), was finalized in July 2020 by the Joint Video Experts Team (JVET). Its main goal is to reduce the bitrate by 50% over its predecessor, the High Efficiency Video Coding (HEVC) standard. Thanks to new advanced tools and features, such as the Quad Tree with nested Multi-Type Tree (QTMTT) block partitioning, VVC provides high coding performance, but at the cost of increased computational complexity. To tackle this complexity, a fast coding unit partition algorithm based on machine learning is proposed in this work for the intra configuration of VVC. The proposed algorithm is formed by five binary Light Gradient Boosting Machine (LightGBM) classifiers, which directly predict the most probable split mode for each coding unit without passing through the exhaustive Rate Distortion Optimization (RDO) process. These LightGBM classifiers were trained offline on a large dataset and then embedded in the optimized VVC implementation known as VVenC. Our experimental results show that the proposed approach offers a good trade-off between time saving and coding efficiency: depending on the chosen preset, it achieves average time savings of 30.21% to 82.46% compared to the VVenC encoder anchor, with a Bjontegaard Delta Bitrate (BDBR) increase of 0.67% to 3.01%, respectively.
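
    To make the classifier side of the approach concrete, the following is a hedged Python sketch of a single binary LightGBM split-mode predictor, standing in for one of the five classifiers described above. The feature files, feature set, hyperparameters, and decision threshold are assumptions for illustration; the actual features, training pipeline, and integration into VVenC are those of the paper and are not reproduced here.

        import numpy as np
        import lightgbm as lgb

        # Hypothetical offline training data: one row of coding-unit features
        # per sample (e.g. block size, QP, texture statistics) and a binary
        # label indicating whether a given split mode was chosen by full RDO.
        X_train = np.load("cu_features.npy")       # assumed pre-extracted features
        y_train = np.load("split_labels.npy")      # 1 = split mode selected, 0 = not

        clf = lgb.LGBMClassifier(n_estimators=200, num_leaves=31, learning_rate=0.1)
        clf.fit(X_train, y_train)

        # At encoding time, one fast prediction replaces an exhaustive RDO check
        # for this particular split decision.
        def fast_split_decision(cu_features, threshold=0.5):
            prob = clf.predict_proba(np.asarray(cu_features).reshape(1, -1))[0, 1]
            return prob >= threshold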

    Area & Power Efficient VLSI Architecture of Mode Decision in Integer Motion Estimation for HEVC Video Coding Standard

    In this paper, we propose a new parallel hardware architecture for the mode decision algorithm based on the Sum of Absolute Differences (SAD) used to compute the motion estimation, the most time-critical algorithm in the recent HEVC video coding standard. This standard introduces new, large, variable block sizes for the motion estimation algorithm, so the SAD must be executed in a much shorter time in order to achieve real-time processing, even for ultra-high-resolution sequences. The proposed accelerator executes the SAD algorithm in parallel for all sub-block prediction units (PUs) and coding units (CUs) whatever their sizes, which yields a large performance improvement, given that all block sizes, and all PUs within each CU, are supported and processed at the same time. The Xilinx Artix-7 (Zynq-7000) FPGA is used for prototyping and synthesis of the proposed accelerator. The mode decision scheme for motion estimation is implemented with 32K LUTs, 50K registers, and 108 Kb of BRAM. The implementation results show that our hardware architecture can process 30 frames per second at 4K (3840 × 2160) resolution in real time at 115.15 MHz.
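
    For reference, the metric the proposed accelerator parallelizes is the plain SAD between a prediction unit and a candidate reference block. The Python sketch below shows the scalar computation together with an illustrative exhaustive integer-pel search over a small window; the window size, frame layout, and search loop are assumptions and say nothing about the hardware mode-decision architecture itself.

        import numpy as np

        def sad(cur_block, ref_block):
            # Sum of Absolute Differences between the current PU and a candidate block.
            return int(np.abs(cur_block.astype(np.int32)
                              - ref_block.astype(np.int32)).sum())

        def best_match(cur_block, ref_frame, x0, y0, search=8):
            # Illustrative exhaustive search over a +/-8 pel window around (x0, y0).
            h, w = cur_block.shape
            best_mv, best_cost = (0, 0), float("inf")
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = y0 + dy, x0 + dx
                    if y < 0 or x < 0 or y + h > ref_frame.shape[0] or x + w > ref_frame.shape[1]:
                        continue                    # candidate falls outside the frame
                    cost = sad(cur_block, ref_frame[y:y + h, x:x + w])
                    if cost < best_cost:
                        best_mv, best_cost = (dx, dy), cost
            return best_mv, best_cost               # (motion vector, SAD cost)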